
    A New Approach to Laboratory Motor Control: MMCS the Modular Motor Control System

    Many projects within the GRASP laboratory involve motion control via electric servo motors, for example in robots, hands, camera mounts and tables. To date each project has been based on a unique hardware/software approach. This document discusses the development of a new modular, host-independent motor control system, MMCS, for laboratory use. The background to the project and the development of the concept are traced. An important hardware component developed is a 2-axis motor control board that can be plugged into an IBM PC bus or connected via an adaptor to a high-performance workstation. To eliminate the need for a detailed understanding of the hardware components, an abstract controller model is proposed. Software implementing this model has been developed as a device driver for the Unix operating system. However, for those who need or wish to program at the hardware level, the manual describes in detail the various custom hardware components of the system.

    Tuning Modular Networks with Weighted Losses for Hand-Eye Coordination

    This paper introduces an end-to-end fine-tuning method to improve hand-eye coordination in modular deep visuo-motor policies (modular networks), where each module is trained independently. Benefiting from weighted losses, the fine-tuning method significantly improves the performance of the policies on a robotic planar reaching task. Comment: 2 pages, to appear in the Deep Learning for Robotic Vision (DLRV) Workshop in CVPR 201
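The weighted-loss idea can be sketched as a single combined objective that trades off the pre-trained modules during fine-tuning (a minimal illustration; the function name and weight values are hypothetical, not taken from the paper):

```python
def weighted_fine_tune_loss(perception_loss, control_loss,
                            w_perception=0.5, w_control=1.0):
    """Combine per-module losses into one end-to-end objective.

    The weights (illustrative values) let the fine-tuning stage
    emphasise the final control error while still regularising the
    independently pre-trained perception module.
    """
    return w_perception * perception_loss + w_control * control_loss

# Toy example: per-module losses measured on one batch.
loss = weighted_fine_tune_loss(perception_loss=0.8, control_loss=0.2)
print(loss)  # 0.5*0.8 + 1.0*0.2 = 0.6
```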

    Fast Image Segmentation

    Image segmentation remains one of the greatest problems in machine vision. The technique described here takes an image and a geometric description of the required object, determines multiple binary thresholds to segment the image, and combines the information from the appropriate thresholds. By utilizing region-growing hardware it is possible to achieve segmentation in less than 2 seconds.
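The multiple-threshold step can be sketched as follows, using the expected object area as a stand-in for the geometric description (a minimal sketch; the selection criterion and names here are assumptions, not the paper's method):

```python
import numpy as np

def multi_threshold_segment(image, thresholds, expected_area):
    """Binarise at several thresholds and keep the mask whose
    foreground area best matches the expected object area
    (an illustrative proxy for the geometric description)."""
    best_mask, best_err = None, float("inf")
    for t in thresholds:
        mask = image >= t
        err = abs(int(mask.sum()) - expected_area)
        if err < best_err:
            best_mask, best_err = mask, err
    return best_mask

# Toy image: a bright 3x3 square on a dark background.
img = np.zeros((8, 8))
img[2:5, 2:5] = 10
mask = multi_threshold_segment(img, thresholds=[1, 5, 20], expected_area=9)
print(int(mask.sum()))  # 9
```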

    Video Rate Visual Servoing for Robots

    This paper presents some preliminary experimental results in robotic visual servoing, utilizing a newly available hardware region-growing and moment-generation unit. A Unix-based workstation, in conjunction with special-purpose video processing hardware, has been used to visually close the robot position loop at video field rate, 60 Hz. The architecture and capabilities of the system are discussed. Performance of the closed-loop position control is investigated analytically and via step-response tests, and experimental results are presented. Initial results are for 2-dimensional servoing, but extensions to 3-dimensional positioning are covered, along with methods for monocular distance determination. Finally, some limitations of the approach and areas for further work are covered.
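Closed-loop visual position control of this kind is commonly driven by a proportional law of the form v = -λ J⁺ (s - s*), applied once per video field. A minimal sketch, assuming the pseudo-inverse of the image Jacobian is available (names and values are illustrative):

```python
import numpy as np

def visual_servo_step(feature, feature_goal, J_pinv, gain=0.5):
    """One iteration of a proportional image-based servo law:
    v = -gain * J_pinv @ (s - s*).  J_pinv is the pseudo-inverse
    of the image Jacobian, assumed known for this illustration."""
    error = feature - feature_goal
    return -gain * J_pinv @ error

# Toy 1-DOF case: the image Jacobian is the 1x1 matrix [[2.0]].
J_pinv = np.linalg.pinv(np.array([[2.0]]))
v = visual_servo_step(np.array([4.0]), np.array([0.0]), J_pinv)
print(v)  # [-1.]  i.e. -0.5 * 0.5 * 4.0
```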

    Active text perception for mobile robots

    Our everyday environment is full of text, but this rich source of information remains largely inaccessible to mobile robots. In this paper we describe an active text-spotting system that uses a small number of wide-angle views to locate putative text in the environment and then foveates and zooms onto that text in order to improve the reliability of text recognition. We present extensive experimental results obtained with a pan/tilt/zoom camera and a ROS-based mobile robot operating in an indoor environment.

    Computer Vision Based Collision Avoidance for UAVs

    This research is investigating the feasibility of using computer vision to provide robust sensing capabilities suitable for UAV collision avoidance. Presented in this paper is a preliminary strategy for detecting collision-course aircraft from image sequences and a discussion of its performance in processing a real-life data set. Initial trials were conducted on image streams featuring real collision-course aircraft against a variety of daytime backgrounds. A morphological filtering approach was implemented and used to extract target features from background clutter. Detection performance in images with low signal-to-noise ratios was improved by averaging image features over multiple frames, using dynamic programming to account for target motion. Preliminary analysis of the initial data set has yielded encouraging results, demonstrating the ability of the algorithm to detect targets even in situations where visibility to the human eye was poor.
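Morphological filters of the close-minus-open family are a common way to extract small point-like targets from background clutter; a minimal sketch of that idea (the paper's exact filter and window size are not specified, so the choices here are assumptions):

```python
import numpy as np
from scipy.ndimage import grey_closing, grey_opening

def close_minus_open(image, size=3):
    """Greyscale close-minus-open filter: emphasises features
    smaller than the structuring element (here a 3x3 window)
    while suppressing larger background structure."""
    return grey_closing(image, size=size) - grey_opening(image, size=size)

# Toy frame: a single bright pixel (distant aircraft) on a
# smooth background.
frame = np.full((9, 9), 100.0)
frame[4, 4] = 140.0
response = close_minus_open(frame)
print(response.max())  # strongest response is at the target pixel
```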

    Image Processing Algorithms for UAV 'Sense and Avoid'

    This research is investigating the feasibility of using computer vision to provide a level of situational awareness suitable for the task of UAV "sense and avoid." This term describes the capacity of a UAV to detect airborne traffic and respond with appropriate avoidance maneuvers in order to maintain minimum separation distances. As reflected in regulatory requirements such as FAA Order 7610.4, this capability must demonstrate a level of performance which meets or exceeds that of an equivalent human pilot. Presented in this paper is a comparison between two initial image processing algorithms designed to detect small, point-like features (potentially corresponding to distant, collision-course aircraft) in image streams, and a discussion of their performance in processing a real-life collision scenario. This performance is compared against the stated benchmark of equivalent human performance, specifically the measured detection times of an alerted human observer. The two algorithms were used to process a series of images featuring real collision-course aircraft against a variety of daytime backgrounds. Preliminary analysis of this data set has yielded encouraging results, achieving first detection times at distances of approximately 6.5 km (3.5 nmi), which are 35-40% greater than those of an alerted human observer. Comparisons were also drawn between the two detection algorithms, demonstrating that a new approach designed to increase resilience to image noise achieves a lower rate of false alarms, particularly in tests featuring more sensitive detection thresholds.

    Spherical image-based visual servo and structure estimation

    This paper presents a formulation of image-based visual servoing (IBVS) for a spherical camera where coordinates are parameterized in terms of colatitude and longitude: IBVSSph. The image Jacobian is derived, and simulation results are presented for canonical rotational, translational and general motion. Problems with large rotations that affect the planar perspective form of IBVS are not present on the sphere, whereas the desirable robustness properties of IBVS are shown to be retained. We also describe a structure-from-motion (SfM) system based on camera-centric spherical coordinates and show how a recursive estimator can be used to recover structure. The spherical formulations for IBVS and SfM are particularly suitable for platforms, such as aerial and underwater robots, that move in SE(3).
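The colatitude/longitude parameterization referred to above can be written in the standard form (a sketch of the coordinates only; the paper's full image Jacobian is not reproduced here):

\[
\theta = \arccos\frac{Z}{R}, \qquad
\phi = \operatorname{atan2}(Y, X), \qquad
R = \sqrt{X^2 + Y^2 + Z^2},
\]

where \((X, Y, Z)\) is a world point expressed in the camera frame, \(\theta \in [0, \pi]\) is the colatitude and \(\phi \in (-\pi, \pi]\) is the longitude of its projection onto the unit sphere.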

    Dynamic Issues in Robot Visual-Servo Systems

    This paper poses a number of questions related to the performance and structure of closed-loop visual control, or visual servo, systems. While the fundamentals of visual servo control are well known and systems have been demonstrated for many years, the achieved performance is far less than could be expected. In particular, the questions discussed relate to fundamental control system structure, performance metrics, compensator design and the choice of feedback versus feedforward control.
    1 Introduction
    The use of visual sensors with robot manipulators has a history of over two decades, dating back to early work with block worlds. However, the performance of modern closed-loop visual control, or visual-servo, systems is significantly poorer than would be expected from a control systems point of view. In this paper it is argued that the reason for such poor performance is that the dynamics of the system, vision and robot, have been ignored. To this end, a distinction is introduced between..

    In situ Measurement of Robot Motor Electrical Constants

    Motor torque constant is an important parameter in modeling and controlling a robot axis. In practice this parameter can vary considerably from the manufacturer's specification, if available, and this makes it desirable to characterise individual motors. Traditional techniques require that the motor be removed from the robot for testing, or that an elaborate technique involving weights and pulleys be employed. This paper describes a novel method for measuring the torque constant of robot servo motors in situ, based on the equivalence of the motor torque and back EMF constants. It requires a very simple experimental procedure, utilizes existing axis position sensors, and eliminates effects due to static friction and joint cross-coupling. A straightforward extension to this approach can provide a measurement of motor armature impedance. Experimental results obtained for a Puma 560 are discussed and compared with other published results.
    1 Introduction
    A large number of existing robot m..
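The equivalence the method relies on follows from a power balance: in consistent SI units, the torque constant (N·m/A) and the back EMF constant (V·s/rad) of a permanent-magnet DC motor are numerically equal:

\[
\tau = K_t\, i, \qquad e = K_b\, \omega .
\]

Lossless electromechanical energy conversion requires the electrical power absorbed by the back EMF to equal the mechanical power delivered:

\[
e\, i = \tau\, \omega
\;\Rightarrow\;
K_b\, \omega\, i = K_t\, i\, \omega
\;\Rightarrow\;
K_t = K_b ,
\]

so a measurement of the back EMF constant, obtainable from voltage and the existing axis position sensors, yields the torque constant directly.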